Implicit regularization





Neural Information Processing Systems

A major hurdle in this study is that implicit regularization in deep learning seems to kick in only with certain types of data (not with random data, for example), and we lack mathematical tools for reasoning about real-life data. Thus one needs a simple test-bed for the investigation, where the data admits a crisp mathematical formulation. Following earlier works, we focus on the problem of matrix completion: given a randomly chosen subset of entries from an unknown matrix W, the task is to recover the unseen entries. To cast this as a prediction problem, we may view each entry in W as a data point: observed entries constitute the training set, and the average reconstruction error over the unobserved entries is the test error, quantifying generalization. Fitting the observed entries is obviously an underdetermined problem with multiple solutions.
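The matrix-completion test-bed described above can be sketched in a few lines of numpy. Everything here (matrix size, rank, observation probability, the depth-2 factorization, the learning rate) is an illustrative assumption, not the paper's actual experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative test-bed (sizes and rank are arbitrary choices): a
# low-rank ground-truth matrix W_star, of which a random subset of
# entries is observed.
n, rank = 20, 2
W_star = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
mask = rng.random((n, n)) < 0.5            # observed entries = training set

# Parameterize the completion as a product W = U @ V (a depth-2
# factorization) and fit only the observed entries by gradient
# descent from a small random initialization.
U = 1e-3 * rng.standard_normal((n, n))
V = 1e-3 * rng.standard_normal((n, n))
lr = 0.01
for _ in range(5000):
    resid = mask * (U @ V - W_star)        # error on observed entries only
    grad_U = resid @ V.T
    grad_V = U.T @ resid
    U -= lr * grad_U
    V -= lr * grad_V

# Train error: average error over observed entries; test error:
# average error over unobserved entries, quantifying generalization.
train_err = np.abs(mask * (U @ V - W_star)).mean()
test_err = np.abs((~mask) * (U @ V - W_star)).mean()
```

Note that the problem is underdetermined: the full-size factors U and V can fit the observed entries exactly in many ways, so any low test error comes from the optimizer's implicit bias rather than from an explicit regularizer.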





Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data

Yuanzhi Li, Yingyu Liang

Neural Information Processing Systems

Neural networks have many successful applications, yet far less theoretical understanding has been gained. Towards bridging this gap, we study the problem of learning a two-layer overparameterized ReLU neural network for multi-class classification via stochastic gradient descent (SGD) from random initialization. In the overparameterized setting, when the data comes from mixtures of well-separated distributions, we prove that SGD learns a network with small generalization error, even though the network has enough capacity to fit arbitrary labels. Furthermore, the analysis provides interesting insights into several aspects of learning neural networks and can be verified by empirical studies on synthetic data and on the MNIST dataset.
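As a rough illustration of the training setup in the abstract (not the authors' analysis or experiments), the following sketch trains a two-layer overparameterized ReLU network by plain SGD on data drawn from a mixture of well-separated Gaussian clusters. All sizes, the cluster construction, and the learning rate are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stand-in for "mixtures of well-separated distributions":
# k Gaussian clusters with small noise, one class per cluster.
d, k, n = 10, 3, 300
centers = 3.0 * rng.standard_normal((k, d))
labels = rng.integers(0, k, size=n)
X = centers[labels] + 0.1 * rng.standard_normal((n, d))

# Two-layer ReLU network with m hidden units, m >> k (overparameterized).
m = 200
W1 = rng.standard_normal((m, d)) / np.sqrt(d)
W2 = rng.standard_normal((k, m)) / np.sqrt(m)

def forward(x):
    h = np.maximum(W1 @ x, 0.0)            # ReLU hidden layer
    return W2 @ h, h

# SGD from random initialization on the cross-entropy loss,
# one sample at a time.
lr = 0.01
for _ in range(20):                        # epochs
    for i in rng.permutation(n):
        logits, h = forward(X[i])
        p = np.exp(logits - logits.max())
        p /= p.sum()                       # softmax probabilities
        p[labels[i]] -= 1.0                # d(loss)/d(logits)
        g_h = (W2.T @ p) * (h > 0)         # backprop through the ReLU
        W2 -= lr * np.outer(p, h)
        W1 -= lr * np.outer(g_h, X[i])

preds = np.array([forward(x)[0].argmax() for x in X])
train_acc = (preds == labels).mean()
```

With m = 200 hidden units for only 3 clusters, the network has far more capacity than the task needs, which mirrors the regime the abstract studies: the structure of the data, not a capacity constraint, is what keeps SGD from overfitting.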